Today I’m going to tell you about something we lost along the way.
I’m going to tell you about how the future of computers is buried in our past.
There is an elephant in the room — ubiquitous, persistent, and poorly handled — and we’ve ignored it for a good 80 years.
It’s a variation of the old “Pay no attention to the man behind the curtain” from The Wizard of Oz. Except we’re going to start with a woman.
Meet Barbara Canright.
Once upon a time, but not too long ago, computers breathed, ate, slept, and pushed pencils. Computers were people. Mrs. Canright was one of the first.
She was instrumental in early work at NASA’s Jet Propulsion Laboratory, back before NASA technically existed and long before mechanical, and later digital, computers and sensors could do the job.
Mrs. Canright and her peers had two advantages back when women were begrudgingly accepted in the workforce during the war years — processing speed and data quality.
They lost the first advantage, speed, before long. The return of World War II veterans to the job market would have pushed them out anyway, but that’s a whole other thing.
We’re dealing today with the fallout from abandoning the second advantage, data quality.
Computers since then have exceeded all predictions for how good they are at crunching numbers. The early machines were stupidly expensive to maintain, and labor, especially overeducated women hitting a really low glass ceiling, was cheap. That equation quickly flipped.
Mrs. Canright did complex physics, after all. Few, maybe fewer than ever, can still do that work by hand, but in the end there is a right answer and a wrong one. We don’t need pencil pushers to get there anymore. We haven’t for half a century.
Instead, we need to use data better. What we need is help asking the right question before we start crunching the data. What Mrs. Canright could do that a multimillion-dollar mainframe couldn’t was evaluate the data before it was presented to anyone else.
That distinction was lost along the way, but no longer.
Data is the one thing we most certainly have. What we have collectively saved, somewhere, is growing far faster than our ability to use it.
The new need isn’t about how quickly you can get an answer to a question. It’s about what questions you can ask while still getting a meaningful answer.
What’s the meaning of life? 42. Douglas Adams was right about one thing: if you build the biggest computer in the universe but phrase the question wrong, you’ll get an answer you can’t use. Garbage in, garbage out.
Mrs. Canright couldn’t ever compete for speed, but quality? That’s another issue, and one that we’ve poorly addressed since her time.
That is poised to change, though. Not the meaning-of-life part, clearly, but our ability to use computers to process data in meaningful ways even when we’ve done a poor job of making sure that data is fit for the purpose we put it to.
Our current technology is at its absolute physical limits. Chips are now so small that the distance between circuits on them is measured in nanometers. You can only cram so much into a given amount of space. A good human brain used to be the gold standard for packing computation into a small package; now we have to worry about the quantum behavior of electrons, the “spooky action” that even Einstein didn’t want to accept.
And what purpose do the results serve? The microprocessors we use still only answer the simplest of questions: yes or no, 1 or 0.
We can’t train computers to tell us when we ask a bad question, but can they adapt to provide a range of answers and probabilities? Yes, but it requires a whole lot more from them than we’ve been able to ask to date.
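To make that concrete, here’s a minimal sketch in Python, with invented numbers rather than anyone’s real workload, of the difference between collapsing a question to a single yes or no and running it many times to report a range of answers with probabilities attached.

```python
import random
import statistics

def noisy_estimate(true_value=100.0, noise=5.0):
    """One run of a calculation over imperfect input data."""
    return random.gauss(true_value, noise)

def yes_no(threshold=103.0):
    """The old framing: one run, collapsed to a single bit."""
    return noisy_estimate() > threshold

def range_of_answers(runs=10_000, threshold=103.0):
    """The framing we need: many runs, summarized as a distribution
    and a probability instead of a lone yes or no."""
    results = [noisy_estimate() for _ in range(runs)]
    return {
        "mean": statistics.mean(results),
        "stdev": statistics.stdev(results),
        "p_over_threshold": sum(r > threshold for r in results) / runs,
    }

print(yes_no())            # True or False, with no hint of how lucky you got
print(range_of_answers())  # e.g. mean ~100, stdev ~5, p_over_threshold ~0.27
```

The second function isn’t smarter than the first; it just refuses to hide how shaky the input data is.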
Transistors on the latest generation of microprocessors are as tiny as 5 nanometers. That’s about 10 silicon atoms across, and about 1,600 times smaller than a human red blood cell.
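Those comparisons are easy to sanity-check with back-of-the-envelope arithmetic. The atomic spacing and cell size below are rough textbook figures, not measurements from any particular chip:

```python
# Rough scale check with approximate textbook values.
transistor_nm = 5           # feature size on the latest chips, in nanometers
silicon_spacing_nm = 0.5    # silicon atoms in the crystal sit roughly this far apart
red_blood_cell_nm = 8_000   # a red blood cell is about 8 micrometers across

print(transistor_nm / silicon_spacing_nm)  # ~10 atoms across
print(red_blood_cell_nm / transistor_nm)   # ~1,600 times smaller than the cell
```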
The push to shrink silicon chips with copper-based circuits has been enough until now. It will no longer suffice.
What we need is a whole new level of capacity. We need the capacity to ask questions that can’t be easily broken down into simple yes or no calculations that we run once. We need thousands and millions and billions of iterations of the same big data number crunches.
We need computers that consistently compare results, correct errors, and refine output. Humans were great at this until we got pushed out of the game. We need to regain this capacity.
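In machine terms, that looks a lot like the way the human computers worked: do the same calculation independently more than once, compare the results, and only trust an answer the runs agree on. A toy sketch, again in Python with made-up numbers:

```python
import random
from collections import Counter

def unreliable_compute(x):
    """A calculation that is sometimes wrong, standing in for bad
    input data or a transient hardware error."""
    correct = x * x
    return correct + random.choice([0, 0, 0, 0, 7])  # wrong about 20% of the time

def checked_compute(x, runs=5):
    """Run the same job several times, compare the results, and keep
    the answer the majority agrees on, the machine version of two
    human computers checking each other's arithmetic."""
    tally = Counter(unreliable_compute(x) for _ in range(runs))
    answer, votes = tally.most_common(1)[0]
    if votes <= runs // 2:
        raise RuntimeError("runs disagree too much; recompute with better data")
    return answer

print(checked_compute(12))  # almost always 144, even though single runs are not reliable
```

Nothing here is exotic; the point is that spending extra iterations on checking is finally cheap enough to be worth it.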
People are incredibly good at that on our own time scales, but that’s no longer good enough. Computers are billions of times faster, and they could be just as good at it, if only we built them that way instead of making the trade we made in Mrs. Canright’s day: sacrificing data quality for the sake of saving money.
That is coming full circle, though. What she and other human computers could do — push out tough computations and guarantee accuracy — is becoming possible again. More importantly, it is becoming economical.
We lost something along the way out of a need for efficiency and savings. A BIG something. We are finally on the cusp of reclaiming that potential.